Graph neural networks (GNNs) have achieved remarkable success in link prediction (GNNLP) tasks. Existing efforts first predefine a subgraph for the whole dataset and then apply GNNs to encode edge representations by leveraging the neighborhood structure induced by the fixed subgraph. The performance of GNNLP methods thus relies heavily on this ad-hoc subgraph. Since node connectivity in real-world graphs is complex, a single shared subgraph is limited for all edges; the choice of subgraph should instead be personalized to each edge. However, performing personalized subgraph selection is nontrivial, since the potential selection space grows exponentially with the number of edges. Moreover, the inference edges are not available during training in link prediction scenarios, so the selection process needs to be inductive. To bridge the gap, we introduce a Personalized Subgraph Selector (PS2), a plug-and-play framework that identifies optimal subgraphs for different edges in an automatic, personalized, and inductive manner when performing GNNLP. PS2 is instantiated as a bi-level optimization problem that can be solved efficiently and differentiably. Coupling GNNLP models with PS2, we suggest a brand-new angle on GNNLP training: first identify the optimal subgraphs for edges, and then focus on training the inference model using the sampled subgraphs. Comprehensive experiments endorse the effectiveness of our proposed method across various GNNLP backbones (GCN, GraphSage, NGCF, LightGCN, and SEAL) and diverse benchmarks (Planetoid, OGB, and recommendation datasets). Our code is publicly available at \url{https://github.com/qiaoyu-tan/PS2}
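For illustration, the toy sketch below shows the differentiable bi-level pattern the abstract refers to: outer "selection" parameters are updated through a one-step, differentiable inner model update so that validation loss improves. It is a generic, self-contained example on a toy regression task, not the PS2 implementation; all variable names and the task itself are ours.

```python
import torch

# Minimal sketch of differentiable bi-level optimization (toy problem, not
# the PS2 model): inner variables w fit the training loss; outer variables
# alpha (playing the role of subgraph-selection parameters) are updated to
# reduce the validation loss *through* the inner update.

torch.manual_seed(0)
x_tr, y_tr = torch.randn(64, 4), torch.randn(64, 1)
x_va, y_va = torch.randn(32, 4), torch.randn(32, 1)

w = torch.zeros(4, 1, requires_grad=True)        # inner: model weights
alpha = torch.zeros(64, requires_grad=True)      # outer: per-example gates
opt_outer = torch.optim.Adam([alpha], lr=0.05)
lr_inner = 0.1

for step in range(200):
    # Inner step: one gradient update of w under the current soft gates.
    gate = torch.sigmoid(alpha).unsqueeze(1)     # soft selection in [0, 1]
    loss_tr = ((gate * (x_tr @ w - y_tr)) ** 2).mean()
    g_w, = torch.autograd.grad(loss_tr, w, create_graph=True)
    w_new = w - lr_inner * g_w                   # differentiable inner update

    # Outer step: validation loss of the updated w drives alpha
    # (a one-step approximation of the bi-level objective).
    loss_va = ((x_va @ w_new - y_va) ** 2).mean()
    opt_outer.zero_grad()
    loss_va.backward()
    opt_outer.step()

    with torch.no_grad():                        # commit the inner update
        w -= lr_inner * g_w

print(loss_va.item())
```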
Imbalanced learning is a fundamental challenge in data mining, where the proportions of training samples across classes are severely skewed. Oversampling is an effective technique for tackling imbalanced learning by generating synthetic samples for the minority class. Although numerous oversampling algorithms have been proposed, they largely rely on heuristics, which could be suboptimal since we may need different sampling strategies for different datasets and base classifiers, and they cannot directly optimize the performance metric. Motivated by this, we investigate developing a learning-based oversampling algorithm to optimize classification performance, which is a challenging task because of the huge and hierarchical decision space. At the high level, we need to decide how many synthetic samples to generate. At the low level, we need to determine where the synthetic samples should be located, which depends on the high-level decision, since the optimal locations of the samples may differ for different numbers of samples. To address these challenges, we propose AutoSMOTE, an automated oversampling algorithm that jointly optimizes the decisions at different levels. Motivated by the success of SMOTE~\cite{Chawla2002smote} and its extensions, we formulate the generation process as a Markov decision process (MDP) consisting of three levels of policies that generate synthetic samples within the SMOTE search space. We then leverage deep hierarchical reinforcement learning to optimize the performance metric on the validation data. Extensive experiments on six real-world datasets demonstrate that AutoSMOTE significantly outperforms the state-of-the-art resampling algorithms. The code is available at https://github.com/daochenzha/autosmote
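For context, the SMOTE search space mentioned above is built from linear interpolation between a minority sample and one of its nearest minority neighbors. Below is a minimal NumPy sketch of that heuristic baseline (not the learned AutoSMOTE policy); function and parameter names are ours.

```python
import numpy as np

# Minimal SMOTE-style oversampling: each synthetic point is a random
# interpolation between a minority sample and one of its k nearest
# minority neighbors.

def smote(X_min, n_synthetic, k=5, rng=None):
    rng = rng or np.random.default_rng(0)
    # Pairwise distances among minority samples.
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    neighbors = np.argsort(d, axis=1)[:, :k]     # k nearest minority neighbors

    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(X_min))             # pick a minority sample
        j = rng.choice(neighbors[i])             # pick one of its neighbors
        lam = rng.random()                       # interpolation factor in [0, 1]
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

X_min = np.random.default_rng(1).normal(size=(20, 2))
print(smote(X_min, 10).shape)                    # (10, 2)
```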
Embedding learning is an important technique in deep recommendation models, mapping categorical features to dense vectors. However, the embedding tables often demand an enormous number of parameters, which become the storage and efficiency bottleneck. Distributed training solutions have been adopted to partition the embedding tables across multiple devices. However, the embedding tables can easily lead to load imbalance if not carefully partitioned. This is a significant design challenge of distributed systems, named embedding table sharding: how should we partition the embedding tables to balance the costs across devices? It is a non-trivial task because 1) it is hard to efficiently and precisely measure the cost, and 2) the partition problem is known to be NP-hard. In this work, we introduce our novel practice at Meta, namely AutoShard, which uses a neural cost model to directly predict multi-table costs and leverages deep reinforcement learning to solve the partition problem. Experimental results on an open-sourced large-scale synthetic dataset and Meta's production dataset demonstrate the superiority of AutoShard over heuristic methods. Moreover, the learned policy of AutoShard can transfer to sharding tasks with various numbers of tables and different table ratios without any fine-tuning. Furthermore, AutoShard can efficiently shard hundreds of tables in seconds. The effectiveness, transferability, and efficiency of AutoShard make it desirable for production use. Our algorithms have been deployed in the Meta production environment. A prototype is available at https://github.com/daochenzha/autoshard
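As a point of reference, the sketch below shows the kind of greedy cost-balancing heuristic AutoShard is compared against; it assumes per-table costs are given and additive. In reality multi-table costs are non-additive, which is exactly why the paper predicts them with a learned neural cost model. All names here are illustrative.

```python
import heapq

# Greedy baseline for embedding table sharding: assign each table to the
# currently least-loaded device, placing large tables first so the small
# ones can even things out.

def greedy_shard(table_costs, n_devices):
    """Assign each table to the currently least-loaded device."""
    heap = [(0.0, d, []) for d in range(n_devices)]   # (load, device, tables)
    heapq.heapify(heap)
    for t, cost in sorted(enumerate(table_costs), key=lambda x: -x[1]):
        load, d, tables = heapq.heappop(heap)
        tables.append(t)
        heapq.heappush(heap, (load + cost, d, tables))
    return {d: tables for _, d, tables in heap}

costs = [9.0, 7.5, 6.0, 4.0, 3.5, 2.0, 1.0, 0.5]     # assumed per-table costs
print(greedy_shard(costs, 3))
```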
Deploying deep neural networks (DNNs) on edge devices provides efficient solutions for real-world tasks. Edge devices have been used to efficiently collect massive amounts of data in different domains, and DNNs are effective tools for data processing and analysis. However, designing DNNs for edge devices is challenging due to the limited computational resources and memory. To tackle this challenge, we demonstrate BED, an object detection system for edge devices built on the MAX78000 DNN accelerator. It integrates on-device DNN inference with a camera and an LCD display for image acquisition and detection exhibition, respectively. BED is a concise, effective, and detailed solution covering model training, quantization, synthesis, and deployment. Experimental results indicate that BED can produce accurate detection with a 300-KB tiny DNN model, which requires only 91.9 ms of inference time and 1.845 mJ of energy.
We study time-series classification (TSC), a fundamental task of time-series data mining. Prior work has approached TSC from two major directions: (1) similarity-based methods that classify time series based on nearest neighbors, and (2) deep learning models that directly learn the representations for classification in a data-driven manner. Motivated by the different working mechanisms within these two lines of research, we aim to connect them in such a way as to jointly model the time-series similarities and learn the representations. This is a challenging task because it is unclear how we should efficiently leverage similarity information. To tackle the challenge, we propose Similarity-Aware Time-Series Classification (SimTSC), a conceptually simple and general framework that models similarity information with graph neural networks (GNNs). Specifically, we formulate TSC as a node classification problem on a graph, where the nodes correspond to time series and the links correspond to pairwise similarities. We further design a graph construction strategy and a batch training algorithm with negative sampling to improve training efficiency. We instantiate SimTSC with ResNet as the backbone and Dynamic Time Warping (DTW) as the similarity measure. Extensive experiments on the full UCR datasets and several multivariate datasets demonstrate the effectiveness of incorporating similarity information into deep learning models in both supervised and semi-supervised settings. Our code is available at https://github.com/daochenzha/simtsc
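To make the graph-construction idea concrete, here is a minimal sketch: pairwise DTW distances between series become a k-nearest-neighbor adjacency matrix on which a GNN could run node classification. This is a bare-bones illustration (plain O(n^2) DTW, no batching or negative sampling); function names are ours.

```python
import numpy as np

# Pairwise DTW distances -> kNN adjacency matrix, as a toy version of the
# graph construction described above.

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_graph(series, k=3):
    n = len(series)
    d = np.array([[dtw(series[i], series[j]) for j in range(n)]
                  for i in range(n)])
    np.fill_diagonal(d, np.inf)
    adj = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d[i])[:k]:           # link each node to its k nearest
            adj[i, j] = adj[j, i] = 1.0
    return adj

rng = np.random.default_rng(0)
series = [rng.normal(size=50).cumsum() for _ in range(8)]
print(knn_graph(series, k=3))
```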
Machine learning models are becoming pervasive in high-stakes applications. Despite their clear benefits in terms of performance, the models can exhibit bias against minority groups and cause fairness issues in decision-making processes, leading to severe negative impacts on individuals and society. In recent years, various techniques have been developed to mitigate the bias of machine learning models. Among them, in-processing methods have drawn increasing attention from the community, where fairness is directly taken into consideration during model design to induce intrinsically fair models and fundamentally mitigate fairness issues in outputs and representations. In this survey, we review the current progress of in-processing bias mitigation techniques. Based on where fairness is achieved in the model, we categorize them into explicit and implicit methods, where the former directly incorporates fairness metrics into training objectives, and the latter focuses on refining latent representation learning. Finally, we conclude the survey with a discussion of the research challenges in this community to motivate future exploration.
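As a concrete instance of the "explicit" category, the sketch below adds a soft demographic-parity gap as a differentiable penalty to a standard training objective. The model, synthetic data, and the trade-off weight are toy stand-ins of ours, not drawn from any specific surveyed method.

```python
import torch

# Fairness-regularized training: binary classifier loss plus a differentiable
# demographic parity penalty on a sensitive attribute s.

torch.manual_seed(0)
X = torch.randn(256, 8)
y = (X[:, 0] > 0).float()
s = (torch.rand(256) > 0.5).float()              # sensitive attribute (0/1)

model = torch.nn.Linear(8, 1)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
bce = torch.nn.BCEWithLogitsLoss()
lam = 1.0                                        # fairness/accuracy trade-off

for _ in range(200):
    logits = model(X).squeeze(1)
    p = torch.sigmoid(logits)
    # Demographic parity gap: difference in mean predicted positive rate
    # between the two sensitive groups (soft, hence differentiable).
    gap = (p[s == 1].mean() - p[s == 0].mean()).abs()
    loss = bce(logits, y) + lam * gap
    opt.zero_grad(); loss.backward(); opt.step()

print(f"final parity gap: {gap.item():.3f}")
```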
Action recognition is an important task for video understanding with broad applications. However, developing an effective action recognition solution often requires extensive engineering effort in building and testing different combinations of modules and their hyperparameters. In this demo, we present AutoVideo, a Python system for automated video action recognition. AutoVideo features 1) a highly modular and extendable infrastructure following a standard pipeline language, 2) a list of primitives for pipeline construction, 3) data-driven tuners to save the effort of pipeline tuning, and 4) an easy-to-use graphical user interface (GUI). AutoVideo is released under the MIT license at https://github.com/datamllab/autovideo
We study the problem of estimating latent population flows from aggregated count data. This problem arises when individual trajectories are not available due to privacy concerns or limited measurement fidelity. Instead, aggregated observations are measured at discrete time points, from which the population flows among states are to be estimated. Most related studies tackle this problem by learning the transition parameters of a time-homogeneous Markov process. Nonetheless, most real-world population flows are influenced by various uncertainties such as traffic jams and weather conditions, so in many cases a time-homogeneous Markov model is a poor approximation of the much more complex dynamics. To circumvent this difficulty, we resort to a multi-marginal optimal transport (MOT) formulation, which naturally represents aggregated observations as constrained marginals and encodes time-dependent transition matrices through its cost functions. In particular, we propose to estimate the transition flows from aggregated data by learning the cost functions of the MOT framework, which enables us to capture time-varying dynamic patterns. The experiments demonstrate that the proposed algorithms estimate several real-world transition flows more accurately than related methods.
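For intuition, the sketch below solves the two-marginal building block of the MOT formulation: entropic optimal transport via Sinkhorn iterations, where the marginals play the role of aggregated counts at consecutive time points and the coupling is the estimated transition flow. Learning the cost matrix (the paper's contribution) is omitted; the cost here is a fixed toy example.

```python
import numpy as np

# Entropic OT via Sinkhorn: find a coupling P with marginals mu (time t)
# and nu (time t+1) under cost C, with entropic regularization eps.

def sinkhorn(mu, nu, C, eps=0.1, iters=500):
    K = np.exp(-C / eps)                         # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)                       # match second marginal
        u = mu / (K @ v)                         # match first marginal
    return u[:, None] * K * v[None, :]           # transport plan P

n = 4
mu = np.array([0.4, 0.3, 0.2, 0.1])              # population at time t
nu = np.array([0.1, 0.2, 0.3, 0.4])              # population at time t+1
C = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]).astype(float)
P = sinkhorn(mu, nu, C)
print(P.round(3), P.sum(axis=1).round(3), P.sum(axis=0).round(3))
```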
Given a piece of text, a video clip, and a reference audio, the movie dubbing task (also known as visual voice cloning, V2C) aims to generate speech that matches the speaker's emotion presented in the video, using the desired speaker's voice as reference. V2C is more challenging than conventional text-to-speech tasks, as it additionally requires the generated speech to exactly match the varying emotions and speaking speed presented in the video. Unlike previous works, we propose a novel movie dubbing architecture that tackles these problems via hierarchical prosody modelling, which bridges the visual information to the corresponding speech prosody from three aspects: lip, face, and scene. Specifically, we align lip movement with the speech duration, and convey facial expression to speech energy and pitch via an attention mechanism based on valence and arousal representations, inspired by recent psychology findings. Moreover, we design an emotion booster to capture the atmosphere from global video scenes. All these embeddings are used together to generate a mel-spectrogram, which is then converted into speech waves by an existing vocoder. Extensive experimental results on the Chem and V2C benchmark datasets demonstrate the favorable performance of the proposed method. The source code and trained models will be released to the public.
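A much-simplified sketch of the attention-based fusion described above is given below: face-derived valence/arousal features attend over text states to modulate per-phoneme pitch and energy, and a pooled scene embedding acts as the global emotion signal. All module names and sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Toy prosody-fusion module: text states query face (valence/arousal)
# features via cross-attention; a pooled scene embedding is broadcast and
# concatenated before predicting per-phoneme pitch/energy offsets.

class ProsodyFusion(nn.Module):
    def __init__(self, d=128):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.pitch = nn.Linear(2 * d, 1)         # per-phoneme pitch offset
        self.energy = nn.Linear(2 * d, 1)        # per-phoneme energy offset

    def forward(self, text, face, scene):
        # text:  (B, T_text, d)  phoneme states
        # face:  (B, T_face, d)  valence/arousal features per video frame
        # scene: (B, d)          pooled global scene ("emotion booster")
        ctx, _ = self.attn(text, face, face)     # text queries face features
        h = torch.cat([ctx, scene.unsqueeze(1).expand_as(ctx)], dim=-1)
        return self.pitch(h), self.energy(h)

m = ProsodyFusion()
pitch, energy = m(torch.randn(2, 20, 128), torch.randn(2, 40, 128),
                  torch.randn(2, 128))
print(pitch.shape, energy.shape)                 # (2, 20, 1) each
```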
Swarm learning (SL) is an emerging and promising decentralized machine learning paradigm that has achieved high performance in clinical applications. SL addresses the central-server dependency of federated learning by combining edge computing with a blockchain-based peer-to-peer network. While SL shows promising results under the assumption of independent and identically distributed (IID) data across participants, it suffers from performance degradation as the degree of non-IID data increases. To address this problem, we propose a generative augmentation framework for swarm learning called SL-GAN, which augments non-IID data by generating synthetic data from participants. SL-GAN trains generators and discriminators locally and aggregates them periodically via a randomly elected coordinator in the SL network. Under standard assumptions, we theoretically prove the convergence of SL-GAN using stochastic approximation. Experimental results demonstrate that SL-GAN outperforms state-of-the-art methods on three real-world clinical datasets: Tuberculosis, Leukemia, and COVID-19.
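The sketch below illustrates the train-locally / aggregate-periodically pattern with a randomly elected coordinator. Aggregation here is plain parameter averaging, which is an assumption for illustration; the blockchain coordination, the GAN losses, and the paper's exact aggregation rule are omitted.

```python
import copy
import random
import torch

# Decentralized rounds: each peer trains locally, then a randomly elected
# coordinator averages all peers' parameters and every peer syncs to it.
# (Plain averaging is assumed here, not necessarily SL-GAN's exact rule.)

def average_models(models):
    """Average peers' parameters into one state dict."""
    avg = copy.deepcopy(models[0].state_dict())
    for key in avg:
        avg[key] = torch.stack([m.state_dict()[key].float()
                                for m in models]).mean(dim=0)
    return avg

def swarm_rounds(peers, local_train, rounds=5):
    for _ in range(rounds):
        for peer in peers:
            local_train(peer)                    # local generator/discriminator steps
        coordinator = random.randrange(len(peers))   # elected peer runs the averaging
        avg = average_models(peers)              # performed "at" peers[coordinator]
        for peer in peers:                       # every peer syncs to the average
            peer.load_state_dict(avg)

peers = [torch.nn.Linear(4, 2) for _ in range(3)]
swarm_rounds(peers, local_train=lambda m: None)  # no-op local step for demo
print(peers[0].weight.shape)
```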